Lecture 1: Entropy, Convexity, and Matrix Scaling
Author
Abstract
Moreover, if p ≠ q, then the inequality is strict. A proof: the map u ↦ −u log u is strictly concave on [0, 1]; this follows from the fact that its derivative −(1 + log u) is strictly decreasing on (0, 1]. Now, a sum of concave functions is concave, so we conclude that H is concave. Moreover, if p ≠ q, then they differ in some coordinate; strict concavity of the map u ↦ −u log u applied to that coordinate yields strict concavity of H along the segment between p and q.
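As a quick numerical sanity check of the argument above (a minimal sketch in NumPy; the function name `entropy` and the particular distributions are illustrative choices, not from the notes), one can verify that H(λp + (1 − λ)q) exceeds λH(p) + (1 − λ)H(q) for a pair of distinct distributions:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log p_i, with the convention 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

# Two distinct probability distributions, p != q
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.3, 0.3, 0.4])
lam = 0.5

mix = entropy(lam * p + (1 - lam) * q)          # H of the mixture
avg = lam * entropy(p) + (1 - lam) * entropy(q)  # mixture of the H values

# Strict concavity: the inequality is strict precisely because p != q
assert mix > avg
```

Running the check with p = q instead would give equality (up to floating point), matching the strictness claim.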
Similar resources
Lecture 2: Multiplicative Weights and Mirror Descent
In the last lecture, we considered the matrix scaling problem: given non-negative matrices X, T ∈ ℝ^{n×n}_+, our goal was to find non-negative diagonal matrices D1, D2 so that D1XD2 had the same row and column sums as the target matrix T. In other words, we sought to weight the rows and columns of X by positive numbers in order to achieve this. We used entropy optimization to prove the following th...
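The abstract is truncated before the theorem it refers to, but the alternating row/column rescaling it describes is the classical Sinkhorn iteration. The sketch below (function name `sinkhorn_scale` and the specific matrix are illustrative assumptions, not from the lecture) scales a positive matrix toward prescribed row sums r and column sums c:

```python
import numpy as np

def sinkhorn_scale(X, r, c, iters=500):
    """Alternately rescale rows and columns of a positive matrix X so that
    the row sums of D1 @ X @ D2 approach r and its column sums approach c.
    For matrix scaling toward a target T, r and c are T's row/column sums."""
    d1 = np.ones(X.shape[0])
    d2 = np.ones(X.shape[1])
    for _ in range(iters):
        d1 = r / (X @ d2)      # choose d1 to match the row sums
        d2 = c / (X.T @ d1)    # choose d2 to match the column sums
    return np.diag(d1), np.diag(d2)

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
r = np.array([1.0, 1.0])       # target row sums
c = np.array([1.0, 1.0])       # target column sums
D1, D2 = sinkhorn_scale(X, r, c)
Y = D1 @ X @ D2                # Y is approximately doubly stochastic
```

For a strictly positive X this iteration converges geometrically; each update can also be read as a coordinate-wise step of the entropy optimization mentioned in the abstract.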
Ricci curvature, entropy and optimal transport
These are lecture notes on the interplay between optimal transport and Riemannian geometry. On a Riemannian manifold, the convexity of entropy along optimal transport in the space of probability measures characterizes lower bounds of the Ricci curvature. We then discuss geometric properties of general metric measure spaces satisfying this convexity condition. Mathematics Subject Classificatio...
ECE 587 / STA 563: Lecture 2 – Measures of Information
2.1 Quantifying Information; 2.2 Entropy and Mutual Information (2.2.1 Entropy; 2.2.2 Mutual Information; 2.2.3 Example: Test...)
Geodesic convexity of the relative entropy in reversible Markov chains
We consider finite-dimensional, time-continuous Markov chains satisfying the detailed balance condition as gradient systems with the relative entropy E as driving functional. The Riemannian metric is defined via its inverse matrix called the Onsager matrix K . We provide methods for establishing geodesic λ-convexity of the entropy and treat several examples including some more general nonlinear...
Notes on Free Probability Theory
Lecture 1. Free Independence and Free Harmonic Analysis: 1.1 Probability spaces; 1.2 Non-commutative probability spaces; 1.3 Classical independence; 1.4 Free products of non-commutative probability spaces; 1.5 Free Fock space; 1.6 Free Central Limit Theorem; 1.7 Free Harmonic Analysis; 1.8 Further topics. Lecture 2. Random Matrices and Free Probability: 2.1 Rando...